Wait for handover state update before serving the requests #7028
Conversation
err := ni.checkReplicationState(nsEntry, info.FullMethod)
if err != nil {
	var nsErr error
	nsEntry, nsErr = ni.namespaceRegistry.RefreshNamespaceById(nsEntry.ID())
Hmm, this will keep refreshing the namespace? I mean, all requests will perform a refresh upon retry.
Also, if the handover error is from the history service, then the refresh here is unnecessary.
Makes sense. We can also just wait for the ns refresh rather than doing this proactive refresh.
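For readers following along, a minimal self-contained sketch (stand-in types and names, not the real Temporal APIs) of the kind of check being discussed: the state check returns a handover error, and the interceptor then decides whether to refresh the namespace proactively or simply wait for the registry's own refresh.

```go
package handoversketch

import "errors"

// Stand-in types; the real code uses the namespace registry and enums packages.
type replicationState int

const (
	stateNormal replicationState = iota
	stateHandover
)

var errNamespaceHandover = errors.New("namespace is in handover state")

// allowedMethodsDuringHandover mirrors the allow-list seen later in the diff;
// the entry here is illustrative.
var allowedMethodsDuringHandover = map[string]struct{}{
	"/temporal.api.workflowservice.v1.WorkflowService/DescribeNamespace": {},
}

// checkReplicationState rejects requests for namespaces in handover, except
// for methods that are explicitly allowed during handover.
func checkReplicationState(state replicationState, fullMethod string) error {
	if state != stateHandover {
		return nil
	}
	if _, ok := allowedMethodsDuringHandover[fullMethod]; ok {
		return nil
	}
	return errNamespaceHandover
}
```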
return err
}
return ni.checkReplicationState(namespaceEntry, fullMethod)
// Only retry on ErrNamespaceHandover, other errors will be handled by the retry interceptor
Shall we just let the retry interceptor handle the error, and have this interceptor block until the ns is no longer in the handover state?
I did not go with this because the retry interceptor should be the innermost interceptor. If that is the case, can I add a handover interceptor after the retry interceptor?
I couldn't remember why we moved the retry interceptor to be the innermost interceptor, but I think there must be a reason. I think we should fully understand that to make sure the retry logic here won't cause any issues.
Re "can I add a handover interceptor after the retry interceptor?" — hmm, then why not just retry the handover error in the retry interceptor?
I guess I was proposing something different in my comment, i.e. block on the ns state change instead of retrying (by registering a callback with the namespace registry).
I cannot wait for the ns callback, as the error might be returned from the history nodes, and there is no callback on the frontend.
common/rpc/interceptor/telemetry.go (outdated)
@@ -215,10 +215,14 @@ func (ti *TelemetryInterceptor) RecordLatencyMetrics(ctx context.Context, startT
	userLatencyDuration = time.Duration(val)
	metrics.ServiceLatencyUserLatency.With(metricsHandler).Record(userLatencyDuration)
}
handoverRetryLatency := time.Duration(0)
if val, ok := metrics.ContextCounterGet(ctx, metrics.NamespaceHandoverRetryLatency.Name()); ok {
	handoverRetryLatency = time.Duration(val)
So we don't actually emit a metric for this latency?
Following the existing approach of excluding the backoff retry delay from the no-user latency.
We need to chat about this.
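For reference, a hedged sketch of the arithmetic being discussed: the handover retry latency is read from a context counter (like the user latency) and subtracted from the overall latency rather than emitted as its own metric. All names below are stand-ins, not the real telemetry interceptor code.

```go
package handoversketch

import "time"

// recordNoUserLatency shows the subtraction under discussion: total elapsed
// time minus user latency and minus the time spent waiting on handover
// retries. Inputs are assumed to come from context counters, as in the diff
// above; the negative-value clamp is illustrative.
func recordNoUserLatency(
	elapsed, userLatency, handoverRetryLatency time.Duration,
	record func(time.Duration),
) {
	noUserLatency := elapsed - userLatency - handoverRetryLatency
	if noUserLatency < 0 {
		noUserLatency = 0
	}
	record(noUserLatency)
}
```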
response, err = handler(ctx, req)
if retryCount > 1 {
	retryLatency = retryLatency + handoverRetryPolicy.ComputeNextDelay(0, retryCount, nil)
}
retryCount++
It's a bit hard to follow, as it's calculating the retryLatency before the current attempt, but the code lives after the handler call...
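One way to address the readability concern, sketched with stand-in names: compute and apply the backoff before the attempt, so the accumulated retry latency is settled before the handler runs. computeDelay stands in for handoverRetryPolicy.ComputeNextDelay.

```go
package handoversketch

import "time"

// retryWithBackoff sketches the reordering: the delay for the current attempt
// is computed (and slept) before calling the handler, so the accumulated
// retryLatency is straightforward to follow.
func retryWithBackoff(
	maxAttempts int,
	computeDelay func(attempt int) time.Duration,
	handler func() error,
	shouldRetry func(error) bool,
) (retryLatency time.Duration, err error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if attempt > 0 {
			delay := computeDelay(attempt)
			retryLatency += delay
			time.Sleep(delay)
		}
		err = handler()
		if err == nil || !shouldRetry(err) {
			return retryLatency, err
		}
	}
	return retryLatency, err
}
```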
if namespaceData.ReplicationState() == enumspb.REPLICATION_STATE_HANDOVER {
	cbID := uuid.New()
	waitReplicationStateUpdate := make(chan struct{})
	i.namespaceRegistry.RegisterStateChangeCallback(cbID, func(ns *namespace.Namespace, deletedFromDb bool) {
We probably should change the impl to listen for changes on a specific namespace...
Can be done in a later PR.
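A hypothetical shape for the per-namespace registration mentioned here (not an existing registry API): callbacks keyed by namespace ID, so a waiter is only notified about the namespace it cares about rather than every registry update.

```go
package handoversketch

import "sync"

// namespaceNotifier is a hypothetical helper, not the real namespace registry:
// waiters subscribe to a single namespace ID and are notified only when that
// namespace changes.
type namespaceNotifier struct {
	mu      sync.Mutex
	waiters map[string][]chan struct{} // namespace ID -> waiting channels
}

func newNamespaceNotifier() *namespaceNotifier {
	return &namespaceNotifier{waiters: make(map[string][]chan struct{})}
}

// Subscribe returns a channel that is closed on the next change to nsID.
func (n *namespaceNotifier) Subscribe(nsID string) <-chan struct{} {
	ch := make(chan struct{})
	n.mu.Lock()
	n.waiters[nsID] = append(n.waiters[nsID], ch)
	n.mu.Unlock()
	return ch
}

// Notify wakes every waiter registered for nsID.
func (n *namespaceNotifier) Notify(nsID string) {
	n.mu.Lock()
	chans := n.waiters[nsID]
	delete(n.waiters, nsID)
	n.mu.Unlock()
	for _, ch := range chans {
		close(ch)
	}
}
```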
if i.enabledForNS(namespaceName.String()) {
	startTime := i.timeSource.Now()
	defer func() {
		metrics.HandoverWaitLatency.With(i.metricsHandler).Record(time.Since(startTime))
I guess we only want to emit the metric when the namespace actually waited on the handover state.
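A small sketch of that suggestion, with stand-in names for the registry lookup, the callback-based wait, and the metrics handler: the wait latency is recorded only when the request actually blocked.

```go
package handoversketch

import "time"

// waitForHandoverEnd records the metric only when a wait actually happened:
// if the namespace is not in handover, no latency is recorded at all.
func waitForHandoverEnd(
	inHandover func() bool,
	waitUntilUpdated func() error,
	recordWaitLatency func(time.Duration),
	now func() time.Time,
) error {
	if !inHandover() {
		return nil // nothing to wait for; emit no metric
	}
	start := now()
	defer func() { recordWaitLatency(now().Sub(start)) }()
	return waitUntilUpdated()
}
```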
@@ -267,6 +269,7 @@ func GrpcServerOptionsProvider(
	namespaceLogInterceptor.Intercept, // TODO: Deprecate this with a outer custom interceptor
	metrics.NewServerMetricsContextInjectorInterceptor(),
	authInterceptor.Intercept,
	namespaceHandoverInterceptor.Intercept,
Can you add some comments explaining why it must be in this place?
Is the handover state checking logic in StateValidationIntercept removed?
I did not remove it in this PR since I added the feature flag. Once we are confident, we can remove it.
Do you want to disable the check in StateValidationIntercept if the feature flag is enabled?
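A hedged sketch of how that gating could look (the flag and helpers are hypothetical, not the actual StateValidationIntercept code): skip the legacy handover check whenever the new interceptor owns it.

```go
package handoversketch

// validateNamespaceState sketches gating the old handover check behind the
// feature flag discussed here. handoverInterceptorEnabled stands in for the
// dynamic-config flag enabling the new handover interceptor; when it is on,
// the legacy check is skipped so only the new interceptor handles handover.
func validateNamespaceState(
	inHandover bool,
	handoverInterceptorEnabled bool,
	legacyHandoverCheck func() error,
) error {
	if !inHandover {
		return nil
	}
	if handoverInterceptorEnabled {
		// The new interceptor already waited (or rejected) before we got here.
		return nil
	}
	return legacyHandoverCheck()
}
```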
@@ -267,6 +269,7 @@ func GrpcServerOptionsProvider(
	namespaceLogInterceptor.Intercept, // TODO: Deprecate this with a outer custom interceptor
	metrics.NewServerMetricsContextInjectorInterceptor(),
	authInterceptor.Intercept,
	namespaceHandoverInterceptor.Intercept,
	redirectionInterceptor.Intercept,
I guess theoretically we could retry in the redirection interceptor, since it has the knowledge of whether the redirect will actually happen or not?
I guess so, but the logic could be weird. What should we do if the redirect will not happen?
case <-waitReplicationStateUpdate:
}
i.namespaceRegistry.UnregisterStateChangeCallback(cbID)
if err != nil {
nit: we don't need this check?
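For illustration, a stand-in version of the wait block where the callback is unregistered via defer and context cancellation is the only error path, so no trailing error check is needed after the select. The register/unregister callbacks stand in for the namespace registry APIs.

```go
package handoversketch

import "context"

// waitForStateChange sketches the wait shown above with defer-based cleanup:
// register the callback, wait for either the state-change signal or context
// cancellation, and unregister on the way out.
func waitForStateChange(
	ctx context.Context,
	register func(ch chan<- struct{}) (unregister func()),
) error {
	stateChanged := make(chan struct{}, 1)
	unregister := register(stateChanged)
	defer unregister()

	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-stateChanged:
		return nil
	}
}
```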
methodName string,
) (waitTime *time.Duration, retErr error) {
	if _, ok := allowedMethodsDuringHandover[methodName]; ok {
		return
nit: returning nil, nil is surprising for the caller, I think.
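A tiny sketch of the nit, with stand-in names: spell out the return values so the allow-listed early exit reads as an explicit "no wait, no error".

```go
package handoversketch

import "time"

// waitTimeForMethod uses explicit returns instead of a naked `return`, per the
// nit above; the method-name check mirrors the allow-list in the diff.
func waitTimeForMethod(
	methodName string,
	allowed map[string]struct{},
	computeWait func() *time.Duration,
) (*time.Duration, error) {
	if _, ok := allowed[methodName]; ok {
		return nil, nil // explicitly: no wait needed, no error
	}
	return computeWait(), nil
}
```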
What changed?
Wait for handover state update before serving the requests.
Why?
We need to wait on the handover state update before the redirection interceptor runs. Once the replication state is updated, the request is served and routed to the correct endpoint.
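Based on the diff above, the intended ordering can be sketched as a chain in which the handover interceptor sits immediately before the redirection interceptor; the interceptor type and variables below are simplified stand-ins for the real grpc.UnaryServerInterceptor values.

```go
package handoversketch

// A simplified view of the chain ordering: the handover interceptor runs
// after auth but before redirection, so by the time the redirection
// interceptor picks a target, the namespace is no longer in handover.
type interceptor func(next func() error) error

func buildChain(
	namespaceLog, metricsContext, auth, namespaceHandover, redirection interceptor,
) []interceptor {
	return []interceptor{
		namespaceLog,
		metricsContext,
		auth,
		namespaceHandover, // wait here until the handover state is updated
		redirection,       // then route to the (now correct) endpoint
	}
}
```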
How did you test it?
TODO
Potential risks
Documentation
Is hotfix candidate?